30 research outputs found

    How I won the "Chess Ratings - Elo vs the Rest of the World" Competition

    Full text link
    This article discusses in detail the rating system that won the Kaggle competition "Chess Ratings: Elo vs the rest of the world". The competition provided a historical dataset of outcomes for chess games, and aimed to discover whether novel approaches can predict the outcomes of future games more accurately than the well-known Elo rating system. The winning rating system, called Elo++ in the rest of the article, builds upon the Elo rating system. Like Elo, Elo++ uses a single rating per player and predicts the outcome of a game by using a logistic curve over the difference in ratings of the players. The major component of Elo++ is a regularization technique that avoids overfitting these ratings. The dataset of chess games and outcomes is relatively small, and one has to be careful not to draw "too many conclusions" out of the limited data. Many approaches tested in the competition showed signs of such overfitting. The leaderboard was dominated by attempts that did a very good job on a small test dataset but couldn't generalize well on the private hold-out dataset. The Elo++ regularization takes into account the number of games per player, the recency of these games and the ratings of the opponents. Finally, Elo++ employs a stochastic gradient descent scheme for training the ratings, and uses only two global parameters (white's advantage and the regularization constant) that are optimized using cross-validation.
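    The abstract only sketches the method, so here is a minimal, hedged illustration of the core ideas in Python: a logistic prediction over the rating difference plus a white-advantage term, stochastic gradient descent on the ratings, and a regularizer that pulls each rating toward an average of its opponents' ratings. The squared-error loss, the learning rate, and the simplified (unweighted) regularization anchor are assumptions of this sketch, not details taken from the article, which additionally weights games by count and recency.

```python
import math
import random
from collections import defaultdict

def predict(r_white, r_black, gamma):
    """P(white scores) as a logistic curve over the rating difference plus white's advantage."""
    return 1.0 / (1.0 + math.exp(-(r_white + gamma - r_black)))

def train(games, gamma=0.1, lam=0.2, lr=0.05, epochs=100, seed=0):
    """games: list of (white, black, score) with score 1.0 / 0.5 / 0.0 from white's side.
    gamma (white's advantage) and lam (regularization constant) are the two global
    parameters; the article tunes them with cross-validation."""
    rng = random.Random(seed)
    ratings = defaultdict(float)

    # Regularization anchor: the average rating of each player's opponents.
    # (Elo++ additionally weights games by recency and count; omitted in this sketch.)
    opponents = defaultdict(list)
    for w, b, _ in games:
        opponents[w].append(b)
        opponents[b].append(w)

    order = list(games)
    for _ in range(epochs):
        rng.shuffle(order)
        for w, b, score in order:
            p = predict(ratings[w], ratings[b], gamma)
            grad = (p - score) * p * (1.0 - p)   # d/d(rating) of squared error, up to a constant
            anchor_w = sum(ratings[o] for o in opponents[w]) / len(opponents[w])
            anchor_b = sum(ratings[o] for o in opponents[b]) / len(opponents[b])
            ratings[w] -= lr * (grad + lam * (ratings[w] - anchor_w) / len(opponents[w]))
            ratings[b] -= lr * (-grad + lam * (ratings[b] - anchor_b) / len(opponents[b]))
    return dict(ratings)

games = [("alice", "bob", 1.0), ("bob", "carol", 0.5), ("carol", "alice", 0.0)]
print(train(games))
```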

    The Dwarf Data Cube Eliminates the High Dimensionality Curse

    Get PDF
    The data cube operator encapsulates all possible groupings of a data set and has proved to be an invaluable tool in analyzing vast amounts of data. However, its apparent exponential complexity has significantly limited its applicability to low-dimensional datasets. Recently the idea of the dwarf data cube model was introduced, and showed that high-dimensional "dwarf data cubes" are orders of magnitude smaller in size than the original data cubes even when they calculate and store every possible aggregation with 100% precision. In this paper we present a surprising analytical result proving that the size of dwarf cubes grows polynomially with the dimensionality of the data set and, therefore, a full data cube at 100% precision is not inherently cursed by high dimensionality. This striking result of polynomial complexity reformulates the context of cube management and redefines most of the problems associated with data warehousing and On-Line Analytical Processing. We also develop an efficient algorithm for estimating the size of dwarf data cubes before actually computing them. Finally, we complement our analytical approach with an experimental evaluation using real and synthetic data sets, and demonstrate our results. (UMIACS-TR-2003-12)
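    For context, the "all possible groupings" that make a full data cube appear exponentially expensive can be illustrated with a naive cube computation in Python; this shows the 2^d group-bys that the dwarf representation compresses and is not the dwarf structure itself (the dimension names and rows are invented for the example).

```python
from itertools import combinations
from collections import defaultdict

# Naive illustration of the data cube operator: aggregate a measure over every
# possible grouping (subset of dimensions), i.e. 2^d group-bys for d dimensions.
# This is the exponential baseline that the dwarf representation avoids.

rows = [
    {"store": "S1", "product": "P1", "month": "Jan", "sales": 10},
    {"store": "S1", "product": "P2", "month": "Jan", "sales": 5},
    {"store": "S2", "product": "P1", "month": "Feb", "sales": 7},
]
dims = ["store", "product", "month"]

cube = {}
for k in range(len(dims) + 1):
    for group in combinations(dims, k):          # one of the 2^d groupings
        agg = defaultdict(int)
        for r in rows:
            key = tuple(r[d] for d in group)     # dims omitted from the key are rolled up to ALL
            agg[key] += r["sales"]
        cube[group] = dict(agg)

print(len(cube), "groupings for", len(dims), "dimensions")   # 8 groupings
print(cube[("store",)])                                      # {('S1',): 15, ('S2',): 7}
```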

    Shared Index Scans For Data Warehouses

    Get PDF
    In this paper we propose a new "transcurrent execution model" (TEM) for concurrent user queries against tree indexes. Our model exploits intra-parallelism of the index scan and dynamically decomposes each query into a set of disjoint "query patches". TEM integrates the ideas of prefetching and shared scans in a new framework, suitable for dynamic multi-user environments. It supports time constraints in the scheduling of these patches and introduces the notion of data flow for achieving a steady progress of all queries. Our experiments demonstrate that transcurrent query execution results in high locality of I/O, which in turn translates to performance benefits in terms of query execution time, buffer hit ratio and disk throughput. These benefits increase as the workload in the warehouse increases and offer a scalable solution to the I/O problem of data warehouses.
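    The abstract does not spell out the decomposition mechanism, but the general idea of splitting overlapping concurrent range scans into disjoint patches, so that each leaf page is read once and shared by every interested query, could be sketched roughly as follows. The page-range representation and the scheduling order are assumptions made for illustration; this is not the paper's TEM algorithm.

```python
# Rough illustration (not the paper's TEM): decompose overlapping index range
# scans into disjoint "patches" of leaf pages, then read each patch once and
# feed every query that covers it, so concurrent scans share I/O.

def build_patches(scans):
    """scans: dict query_id -> (first_page, last_page), inclusive page ranges."""
    bounds = sorted({b for lo, hi in scans.values() for b in (lo, hi + 1)})
    patches = []
    for lo, hi in zip(bounds, bounds[1:]):
        interested = [q for q, (qlo, qhi) in scans.items() if qlo < hi and lo <= qhi]
        if interested:
            patches.append(((lo, hi - 1), interested))
    return patches

def run(scans, read_page):
    for (lo, hi), queries in build_patches(scans):
        for page in range(lo, hi + 1):
            data = read_page(page)          # single physical read...
            for q in queries:               # ...shared by all queries covering this patch
                print(f"query {q} consumes page {page}: {data}")

scans = {"Q1": (0, 5), "Q2": (3, 9), "Q3": (8, 12)}
run(scans, read_page=lambda p: f"leaf-{p}")
```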

    The polynomial complexity of fully materialized coalesced cubes

    No full text
    The data cube operator encapsulates all possible groupings of a data set and has proved to be an invaluable tool in analyzing vast amounts of data. However, its apparent exponential complexity has significantly limited its applicability to low-dimensional datasets. Recently the idea of the coalesced cube was introduced, and showed that high-dimensional coalesced cubes are orders of magnitude smaller in size than the original data cubes even when they calculate and store every possible aggregation with 100% precision. In this paper we present an analytical framework for estimating the size of coalesced cubes. By using this framework on uniform coalesced cubes we show that their size and the required computation time scale polynomially with the dimensionality of the data set and, therefore, a full data cube at 100% precision is not inherently cursed by high dimensionality. Additionally, we show that such coalesced cubes scale polynomially (and close to linearly) with the number of tuples in the dataset. We were also able to develop an efficient algorithm for estimating the size of coalesced cubes before actually computing them, based only on metadata about the cubes. Finally, we complement our analytical approach with an experimental evaluation using real and synthetic data sets.

    The Dwarf Data Cube Eliminates the High Dimensionality Curse

    No full text
    The data cube operator encapsulates all possible groupings of a data set and has proved to be an invaluable tool in analyzing vast amounts of data. However, its apparent exponential complexity has significantly limited its applicability to low-dimensional datasets. Recently the idea of the dwarf data cube model was introduced, and showed that high-dimensional "dwarf data cubes" are orders of magnitude smaller in size than the original data cubes even when they calculate and store every possible aggregation with 100% precision. In this paper we present a surprising analytical result proving that the size of dwarf cubes grows polynomially with the dimensionality of the data set and, therefore, a full data cube at 100% precision is not inherently cursed by high dimensionality.

    The active MultiSync controller of the cubetree storage organization

    No full text

    Dwarf: Shrinking the PetaCube

    Get PDF
    Dwarf is a highly compressed structure for computing, storing, and querying data cubes. Dwarf identifies prefix and suffix structural redundancies and factors them out by coalescing their store. Prefix redundancy is high on dense areas of cubes, but suffix redundancy is significantly higher for sparse areas. Putting the two together fuses the exponential sizes of high-dimensional full cubes into a dramatically condensed data structure. The elimination of suffix redundancy also yields an equally dramatic reduction in the computation of the cube, because recomputation of the redundant suffixes is avoided. This effect is multiplied in the presence of correlation amongst attributes in the cube. A petabyte 25-dimensional cube was shrunk this way to a 2.3 GB Dwarf cube in less than 20 minutes, a 1:400,000 storage reduction ratio. Still, Dwarf provides 100% precision on cube queries and is a self-sufficient structure that requires no access to the fact table. What makes Dwarf practical is the automatic discovery, in a single pass over the fact table, of the prefix and suffix redundancies, without user involvement or knowledge of the value distributions.
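    A toy sketch of the two redundancies described above, assuming a single SUM measure: prefixes are shared by building a trie over the dimension values, and structurally identical sub-dwarfs (suffixes) are stored only once by memoizing nodes. This illustrates the coalescing idea only; it is not the actual single-pass Dwarf construction algorithm, and the example records are invented.

```python
# Toy illustration of the redundancies Dwarf coalesces, assuming a single SUM
# measure: prefix redundancy (tuples sharing a prefix share path nodes) and
# suffix redundancy (structurally identical sub-dwarfs are stored only once,
# found here by memoizing nodes). Not the actual Dwarf construction algorithm.

def build(records, memo):
    """records: list of (dimension_value_tuple, measure). Returns a canonical node."""
    if not records[0][0]:                        # no dimensions left: aggregate the measure
        node = ("leaf", sum(m for _, m in records))
    else:
        by_value = {}
        for key, m in records:
            by_value.setdefault(key[0], []).append((key[1:], m))
        entries = [(v, build(rs, memo)) for v, rs in sorted(by_value.items())]
        # the ALL cell of this dimension aggregates over every one of its values
        entries.append(("*", build([(k[1:], m) for k, m in records], memo)))
        node = ("node", tuple(entries))
    return memo.setdefault(node, node)           # suffix coalescing: reuse identical sub-dwarfs

records = [(("S1", "C2", "P2"), 70),
           (("S2", "C1", "P1"), 40),
           (("S2", "C1", "P2"), 50)]
memo = {}
root = build(records, memo)
print("distinct coalesced nodes:", len(memo))    # fewer than materializing every cell separately
```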

    Hierarchical Dwarfs for the Rollup Cube

    No full text
    The data cube operator exemplifies two of the most important aspects of OLAP queries: aggregation and dimension hierarchies. In earlier work we presented Dwarf, a highly compressed and clustered structure for creating, storing and indexing data cubes. Dwarf is a complete architecture that supports queries and updates, while also including a tunable granularity parameter that controls the amount of materialization performed. However, it does not directly support dimension hierarchies. Rollup and drill-down queries on dimension hierarchies that naturally arise in OLAP need to be handled externally and are thus very costly. In this paper we present extensions to the Dwarf architecture for incorporating rollup data cubes, i.e. cubes with hierarchical dimensions. We show that the extended Hierarchical Dwarf retains all its advantages in terms of both creation time and space, while being able to directly and efficiently support aggregate queries on every level of a dimension's hierarchy.
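    The abstract does not detail the extension itself, but the kind of rollup query it targets, i.e. aggregating a measure along a dimension hierarchy, can be shown with a small self-contained example; the day/month/year hierarchy and the data below are invented for illustration.

```python
from collections import defaultdict

# Small illustration of a rollup query over a hierarchical dimension
# (here year -> month -> day): rolling up or drilling down only changes
# the prefix length of the hierarchical key used for grouping.

rows = [
    (("2003", "01", "15"), 10),   # (year, month, day), sales
    (("2003", "01", "20"), 4),
    (("2003", "02", "01"), 7),
    (("2004", "01", "05"), 3),
]

def rollup(rows, level):
    """level 0 = grand total, 1 = per year, 2 = per month, 3 = per day."""
    totals = defaultdict(int)
    for key, measure in rows:
        totals[key[:level]] += measure
    return dict(totals)

print(rollup(rows, 1))   # {('2003',): 21, ('2004',): 3}
print(rollup(rows, 2))   # {('2003', '01'): 14, ('2003', '02'): 7, ('2004', '01'): 3}
```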